59 research outputs found

    Convergence Analysis of Ensemble Kalman Inversion: The Linear, Noisy Case

    We present an analysis of ensemble Kalman inversion based on the continuous-time limit of the algorithm. The analysis of the dynamical behaviour of the ensemble allows us to establish well-posedness and convergence results for a fixed ensemble size. We build on the results presented in [26] and generalise them to the case of noisy observational data; in particular, the influence of the noise on the convergence is investigated, both theoretically and numerically. We focus on linear inverse problems, where a very complete theoretical analysis is possible.
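As a concrete illustration of the algorithm being analysed (a minimal sketch with toy dimensions and noise level, not the paper's own code), one discrete ensemble Kalman inversion step for a linear forward map can be written as:

```python
import numpy as np

def eki_step(U, A, y, Gamma):
    """One ensemble Kalman inversion step for a linear forward map A.

    U: (d, J) ensemble of parameter estimates, y: data, Gamma: noise covariance.
    """
    G = A @ U                                  # forward-mapped ensemble
    du = U - U.mean(axis=1, keepdims=True)
    dg = G - G.mean(axis=1, keepdims=True)
    Cug = du @ dg.T / U.shape[1]               # empirical cross-covariance
    Cgg = dg @ dg.T / U.shape[1]               # empirical output covariance
    K = Cug @ np.linalg.inv(Cgg + Gamma)       # Kalman gain
    return U + K @ (y[:, None] - G)            # move every member toward the data

# toy linear inverse problem: recover u_true from y = A u + noise
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
u_true = np.array([1.0, -2.0])
Gamma = 0.01 * np.eye(5)
y = A @ u_true + rng.multivariate_normal(np.zeros(5), Gamma)

U = rng.standard_normal((2, 30))               # initial ensemble, J = 30
misfit0 = np.linalg.norm(A @ U.mean(axis=1) - y)
for _ in range(50):
    U = eki_step(U, A, y, Gamma)
misfit = np.linalg.norm(A @ U.mean(axis=1) - y)
```

Iterating this step drives the ensemble mean toward the least-squares solution; the influence of the observational noise shows up in the residual misfit level at which the iteration stagnates.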

    Analysis of the ensemble Kalman filter for inverse problems

    The ensemble Kalman filter (EnKF) is a widely used methodology for state estimation in partially, noisily observed dynamical systems and for parameter estimation in inverse problems. Despite its widespread use in the geophysical sciences, and its gradual adoption in many other areas of application, analysis of the method is in its infancy. Furthermore, much of the existing analysis deals with the large ensemble limit, far from the regime in which the method is typically used. The goal of this paper is to analyze the method when applied to inverse problems with fixed ensemble size. A continuous-time limit is derived and the long-time behavior of the resulting dynamical system is studied. Most of the rigorous analysis is confined to the linear forward problem, where we demonstrate that the continuous-time limit of the EnKF corresponds to a set of gradient flows for the data misfit in each ensemble member, coupled through a common preconditioner, the empirical covariance matrix of the ensemble. Numerical results demonstrate that the conclusions of the analysis extend beyond the linear inverse problem setting. Numerical experiments are also given which demonstrate the benefits of various extensions of the basic methodology.
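The preconditioned-gradient-flow picture described above can be sketched numerically. The following toy Euler discretization (illustrative sizes, step size, and noise-free data; not the paper's code) integrates du^j/dt = -C(u) A^T Gamma^{-1} (A u^j - y), where C(u) is the empirical ensemble covariance acting as the common preconditioner:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))                 # linear forward map
u_true = np.array([0.5, 1.5])
y = A @ u_true                                  # noise-free data for illustration
Gamma_inv = np.eye(4)                           # identity noise covariance

J, h = 20, 0.05                                 # ensemble size, Euler step size
U = rng.standard_normal((2, J))
misfit0 = np.linalg.norm(A @ U.mean(axis=1) - y)
for _ in range(400):
    du = U - U.mean(axis=1, keepdims=True)
    C = du @ du.T / J                           # empirical covariance = preconditioner
    grad = A.T @ Gamma_inv @ (A @ U - y[:, None])   # misfit gradient, per member
    U = U - h * C @ grad                        # coupled preconditioned gradient flows
misfit = np.linalg.norm(A @ U.mean(axis=1) - y)
```

Because the preconditioner is shared, the particles are coupled: the flow acts only within the span of the ensemble, which is why the fixed-ensemble-size analysis differs from the large-ensemble limit.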

    A strongly convergent numerical scheme from Ensemble Kalman inversion

    The Ensemble Kalman methodology in an inverse problems setting can be viewed as an iterative scheme, which is a weakly tamed discretization scheme for a certain stochastic differential equation (SDE). Assuming a suitable approximation result, dynamical properties of the SDE can be rigorously pulled back via the discrete scheme to the original Ensemble Kalman inversion. The results of this paper take a step towards closing the gap of the missing approximation result by proving a strong convergence result for a simplified model given by a scalar stochastic differential equation. We focus here on a toy model with properties similar to those arising in the context of the Ensemble Kalman filter. The proposed model can be interpreted as a single particle filter for a linear map and thus forms the basis for further analysis. The difficulty in the analysis arises from the formally derived limiting SDE having non-globally Lipschitz continuous nonlinearities both in the drift and in the diffusion. Here the standard Euler-Maruyama scheme might fail to provide a strongly convergent numerical scheme, and taming is necessary. In contrast to the strong taming usually used, the method presented here provides a weaker form of taming. We present a strong convergence analysis by first proving convergence on a domain of high probability using a cut-off or localisation; combined with bounds on the moments of both the SDE and the numerical scheme, a bootstrapping argument then yields strong convergence.
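To illustrate why taming matters, here is a sketch of a classically tamed Euler-Maruyama scheme (Hutzenthaler-Jentzen-style taming of both coefficients, not the weaker taming the paper develops) applied to a toy scalar SDE with superlinear drift and diffusion; the SDE and step size are illustrative:

```python
import numpy as np

def tamed_em(x0, drift, diff, h, n_steps, rng):
    """Tamed Euler-Maruyama: superlinear coefficients are damped by a factor
    1 / (1 + h * |coefficient|), so a single step can no longer explode."""
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h))
        b, s = drift(x), diff(x)
        x = x + h * b / (1 + h * abs(b)) + s / (1 + h * abs(s)) * dw
    return x

rng = np.random.default_rng(2)
# toy SDE with non-globally-Lipschitz drift and diffusion:
#   dX = -X^3 dt + X^2 dW   (the plain Euler-Maruyama scheme can diverge here)
xs = np.array([tamed_em(1.0, lambda x: -x**3, lambda x: x**2, 1e-3, 2000, rng)
               for _ in range(200)])
```

The taming factors cap each increment, which keeps the moments of the numerical scheme bounded, the key ingredient the abstract's bootstrapping argument combines with the localisation step.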

    On the Convergence of the Laplace Approximation and Noise-Level-Robustness of Laplace-based Monte Carlo Methods for Bayesian Inverse Problems

    The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters and models. We are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in case of concentrated posterior measures. The concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. However, it can pose a computational challenge for numerical methods based on the prior or reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a posteriori estimate with covariance matrix depending on the log-posterior density. We discuss convergence results of the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling and Laplace-based quasi-Monte Carlo methods are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based importance sampling and plain quasi-Monte Carlo are not. Numerical experiments are presented to illustrate the theoretical findings.
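A minimal 1-D sketch of the idea (toy forward map, prior, and noise level chosen here for illustration, not the paper's setting): build the Laplace approximation at the MAP estimate and use it as the proposal for self-normalised importance sampling.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.05                                    # small noise: concentrated posterior
y_obs = 1.2
# unnormalised log-posterior: standard normal prior, forward map u -> u^3
log_post = lambda u: -0.5 * u**2 - 0.5 * ((y_obs - u**3) / sigma) ** 2

# 1) Laplace approximation: Gaussian at the MAP with inverse-Hessian variance
grid = np.linspace(0.0, 2.0, 20001)
u_map = grid[np.argmax(log_post(grid))]         # crude grid search for the MAP
eps = 1e-4
hess = -(log_post(u_map + eps) - 2 * log_post(u_map) + log_post(u_map - eps)) / eps**2
lap_var = 1.0 / hess

# 2) self-normalised importance sampling with the Laplace proposal
n = 5000
u = u_map + np.sqrt(lap_var) * rng.standard_normal(n)
log_w = log_post(u) + 0.5 * (u - u_map) ** 2 / lap_var   # log target - log proposal
w = np.exp(log_w - log_w.max())                 # stabilised, unnormalised weights
post_mean = np.sum(w * u) / w.sum()
```

Because the proposal already concentrates where the posterior does, the weights stay well-behaved as sigma shrinks; a prior-based proposal would instead see its effective sample size collapse.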

    Sampling Sup-Normalized Spectral Functions for Brown-Resnick Processes

    Sup-normalized spectral functions form building blocks of max-stable and Pareto processes and therefore play an important role in modeling spatial extremes. For one of the most popular examples, the Brown-Resnick process, simulation is not straightforward. In this paper, we generalize two approaches for simulation via Markov Chain Monte Carlo methods and rejection sampling by introducing new classes of proposal densities. In both cases, we provide an optimal choice of the proposal density with respect to sampling efficiency. The performance of the procedures is demonstrated in an example.
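A generic rejection-sampling sketch with an explicit proposal density (a plain toy target, not the Brown-Resnick spectral functions themselves): the acceptance rate hinges on how tight the envelope constant is, which is what an optimal proposal choice improves.

```python
import numpy as np

def rejection_sample(log_target, sample_prop, log_prop, log_M, n, rng):
    """Sample an unnormalised target by rejection; requires the envelope
    log_target(x) <= log_M + log_prop(x) for all x."""
    out = []
    while len(out) < n:
        x = sample_prop(rng)
        if np.log(rng.uniform()) < log_target(x) - log_prop(x) - log_M:
            out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
# toy target density proportional to exp(-|x|^3), standard normal proposal;
# log_M = 1.0 bounds the log ratio: -|x|^3 + x^2/2 + 0.5*log(2*pi) <= ~0.94
samples = rejection_sample(
    lambda x: -np.abs(x) ** 3,
    lambda r: r.standard_normal(),
    lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi),
    1.0, 2000, rng)
```

Swapping in a proposal closer in shape to the target would allow a smaller envelope constant log_M and hence fewer rejections, which is the efficiency criterion the abstract optimises.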

    Ensemble Kalman filter for neural network based one-shot inversion

    We study the use of novel techniques arising in machine learning for inverse problems. Our approach replaces the complex forward model by a neural network, which is trained simultaneously in a one-shot sense when estimating the unknown parameters from data, i.e., the neural network is trained only for the unknown parameter. By establishing a link to the Bayesian approach to inverse problems, an algorithmic framework is developed which ensures the feasibility of the parameter estimate w.r.t. the forward model. We propose an efficient, derivative-free optimization method based on variants of the ensemble Kalman inversion. Numerical experiments show that the ensemble Kalman filter for neural network based one-shot inversion is a promising direction combining optimization and machine learning techniques for inverse problems.

    Quantification of airfoil geometry-induced aerodynamic uncertainties - comparison of approaches

    Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods, since it is computationally expensive, especially for uncertainties caused by random geometry variations, which involve a large number of variables. This paper compares five methods (quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, and gradient-enhanced versions of Kriging, radial basis functions, and point collocation polynomial chaos) in their efficiency in estimating statistics of aerodynamic performance under random perturbations of the airfoil geometry, which is parameterized by 9 independent Gaussian variables. The results show that gradient-enhanced surrogate methods achieve better accuracy than direct integration methods with the same computational cost.

    Well Posedness and Convergence Analysis of the Ensemble Kalman Inversion

    The ensemble Kalman inversion is widely used in practice to estimate unknown parameters from noisy measurement data. Its low computational costs, straightforward implementation, and non-intrusive nature make the method appealing in various areas of application. We present a complete analysis of the ensemble Kalman inversion with perturbed observations for a fixed ensemble size when applied to linear inverse problems. The well-posedness and convergence results are based on the continuous-time scaling limits of the method. The resulting coupled system of stochastic differential equations allows us to derive estimates on the long-time behaviour and provides insights into the convergence properties of the ensemble Kalman inversion. We view the method as a derivative-free optimization method for the least-squares misfit functional, which opens up the perspective of using the method in various areas of application such as imaging, groundwater flow problems, and biological problems, as well as in the context of the training of neural networks.
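The perturbed-observations variant analysed above can be sketched as follows (toy dimensions and noise level, not the paper's code): each ensemble member assimilates its own randomly perturbed copy of the data in every iteration, which is the source of the stochasticity in the continuous-time limit.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 3))                 # linear forward map
u_true = np.array([1.0, 0.0, -1.0])
Gamma = 0.05 * np.eye(6)
y = A @ u_true + rng.multivariate_normal(np.zeros(6), Gamma)

J = 40
U = rng.standard_normal((3, J))                 # initial ensemble
misfit0 = np.linalg.norm(A @ U.mean(axis=1) - y)
for _ in range(30):
    G = A @ U
    du = U - U.mean(axis=1, keepdims=True)
    dg = G - G.mean(axis=1, keepdims=True)
    Cug, Cgg = du @ dg.T / J, dg @ dg.T / J
    # perturbed observations: every member sees its own noisy copy of y
    Y = y[:, None] + rng.multivariate_normal(np.zeros(6), Gamma, size=J).T
    U = U + Cug @ np.linalg.solve(Cgg + Gamma, Y - G)
misfit = np.linalg.norm(A @ U.mean(axis=1) - y)
```

Viewed as a derivative-free optimizer, the scheme above only ever evaluates the forward map A @ U, never its adjoint or gradient, which is what makes it non-intrusive.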

    Ensemble-based gradient inference for particle methods in optimization and sampling

    We propose an approach based on function evaluations and Bayesian inference to extract higher-order differential information of objective functions from a given ensemble of particles. Pointwise evaluations {V(x^i)}_i of some potential V in an ensemble {x^i}_i contain implicit information about first- or higher-order derivatives, which can be made explicit with little computational effort (ensemble-based gradient inference, EGI). We suggest using this information to improve established ensemble-based numerical methods for optimization and sampling, such as consensus-based optimization and Langevin-based samplers. Numerical studies indicate that the augmented algorithms are often superior to their gradient-free variants; in particular, the augmented methods help the ensembles to escape their initial domain, to explore multimodal, non-Gaussian settings, and to speed up the collapse at the end of the optimization dynamics. The code for the numerical examples in this manuscript can be found in the paper's GitHub repository (https://github.com/MercuryBench/ensemble-based-gradient.git)
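The first-order version of the idea can be sketched as a linear least-squares fit to the ensemble's pointwise evaluations (toy potential and ensemble; the paper's EGI also extracts higher-order information, which this sketch omits):

```python
import numpy as np

def infer_gradient(X, V_vals):
    """Estimate grad V at the ensemble mean from pointwise evaluations V(x^i)
    by fitting the first-order Taylor model V(x) ~ V(m) + g . (x - m)."""
    D = X - X.mean(axis=0)                      # (J, d) displacements from the mean
    r = V_vals - V_vals.mean()                  # centred evaluations
    g, *_ = np.linalg.lstsq(D, r, rcond=None)   # least-squares slope
    return g                                    # approximate gradient at the mean

rng = np.random.default_rng(5)
V = lambda x: np.sum(x**2, axis=-1)             # potential with known gradient 2x
X = np.array([2.0, -1.0]) + 0.05 * rng.standard_normal((40, 2))  # tight ensemble
g = infer_gradient(X, V(X))                     # close to the true gradient [4, -2]
```

The inferred g can then be plugged into a gradient-based update of, say, a consensus-based or Langevin-type scheme in place of an exact gradient, at the cost of only the function evaluations the ensemble method already performs.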

    Subsampling in ensemble Kalman inversion

    We consider the Ensemble Kalman Inversion, which has recently been introduced as an efficient, gradient-free optimisation method to estimate unknown parameters in an inverse setting. In the case of large data sets, the Ensemble Kalman Inversion becomes computationally infeasible, as the data misfit needs to be evaluated for each particle in each iteration. Here, randomised algorithms like stochastic gradient descent have been demonstrated to successfully overcome this issue by using only a random subset of the data in each iteration, so-called subsampling techniques. Based on a recent analysis of a continuous-time representation of stochastic gradient methods, we propose, analyse, and apply subsampling techniques within Ensemble Kalman Inversion. Indeed, we propose two different subsampling techniques: either every particle observes the same data subset (single subsampling) or every particle observes a different data subset (batch subsampling).
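A sketch of the single-subsampling variant on a toy linear problem (illustrative sizes, subset size, and regularisation; batch subsampling would instead draw a different subset per particle): each iteration evaluates the misfit only on a random subset of the data rows.

```python
import numpy as np

rng = np.random.default_rng(6)
N, d, J = 200, 2, 30                            # data size, parameters, ensemble size
A = rng.standard_normal((N, d))
u_true = np.array([0.3, -0.7])
y = A @ u_true + 0.1 * rng.standard_normal(N)

def eki_update(U, A_s, y_s):
    """One EKI step that touches only the rows of the current data subset."""
    G = A_s @ U
    du = U - U.mean(axis=1, keepdims=True)
    dg = G - G.mean(axis=1, keepdims=True)
    Cug, Cgg = du @ dg.T / J, dg @ dg.T / J
    R = 0.01 * np.eye(len(y_s))                 # (assumed) subset noise covariance
    return U + Cug @ np.linalg.solve(Cgg + R, y_s[:, None] - G)

U = rng.standard_normal((d, J))
misfit0 = np.linalg.norm(A @ U.mean(axis=1) - y)
for _ in range(100):
    # single subsampling: the whole ensemble sees the same random data subset
    idx = rng.choice(N, size=20, replace=False)
    U = eki_update(U, A[idx], y[idx])
misfit = np.linalg.norm(A @ U.mean(axis=1) - y)
```

Each iteration now costs a fraction of a full-data step (20 of 200 rows here), mirroring how stochastic gradient descent trades per-step cost against subsampling noise.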